Reasons for "Correctness"
https://gyazo.com/516cd2e0b10b467b34ae4cf53eb311aa
Reasons for "rightness": "Why should we do it?" An introduction to ethics for thinking about Takafumi Nakamura
normative ethics
"I borrowed money from Mr. A. If I return it, Mr. A will use the money to commit acts of terrorism; should I return it?"
Deontology: "You should return it, because it is your duty to keep your word. If many people happen to die as a result, that is incidental."
Utilitarianism: "You should not return it. It is not good to cause suffering to so many people."
Virtue Ethics: "How do you want to live?"
nishio.icon I think the mistake was making "keep your word" the obligation in the first place.
Since obligations are supposed not to conflict with one another, an obligation should be a consistent combination of every possible good; errors arise because the limited human brain considers one fragment of the obligation in isolation. If the obligation were "keep your promise, as long as doing so protects human life and harms no one," there would be no problem.
And even if such a fully consistent set of obligations existed, it would be impossible for flawed humans to understand and apply it correctly.
SF where the computer makes decisions for you.
Utilitarianism, too, admits a different line of argument:
"Even if Mr. A believes that 'unenlightened fools should be saved by killing them' and manufactures poison gas, that is freedom of thought, and he may be left alone until just before he sprays it" (the harm principle: anything is permitted so long as it causes no harm to others).
nishio.icon I think this turns on how you interpret whether the repayment and the act of terrorism are inseparable.
meta-ethics
The objectivity of morality: "Is there an objective answer to that question?"
Moral normativity: "Is that moral judgment a reason to act?"
Humeanism on Motivation p. 95
Anti-Humeanism: an agent is motivated toward "right action" simply by holding a belief about rightness (a moral conviction) as a reason.
Humeanism: an agent cannot be motivated toward "right action" unless, besides the moral belief, they hold a favorable attitude toward its content (a desire or other sentiment).
The view of humanity that "reason is the slave of the passions."
🤔Isn't that exactly why humans are inferior to computers?
Even if reason accepts that a moral judgment is objectively correct, it will not be acted on without passion.
🤔I suspect an AI would say "such beings are detrimental to society" and remove them, don't you think?
Moore's intuitionism
The claim of naturalism/realism: "moral facts/moral realities," the grounds of moral judgments, can be defined and explained in non-moral terms.
e.g. utilitarianism asserts that "actions that increase happiness are right"; it defines the moral value of "rightness" in terms of an empirical fact (the experience of happiness).
Moore's claim:
The "naturalistic fallacy."
To begin with, moral values are indefinable (nishio.icon why?).
Moral values are "non-natural" and therefore cannot be defined by natural things (e.g., experience)
The natural fact of "desired" is not the same as the normative fact of "desirable."
To define the latter by the former is a naturalistic fallacy.
"Intuition" allows us to perceive what is of moral value and what is morally right conduct (intuitionism).
nishio.icon That can't be right. It is an observed fact that people discriminate and harass without realizing it is wrong.
emotivism
Moral judgments are expressions of the subject's emotions.
"It is ethically objectionable" is a dressed-up way of saying "I don't want it! Ugh!"
Mackie's claim
Different moral codes are products of people's different ways of life and of the endorsement of those who live them.
People do not accept a way of life because they recognize the truth of a moral code; rather, because they accept a way of life, they endorse the rules normed within it as true.
The idea that moral right and wrong exist out in the world, that they exist objectively, is false.
expressivism
Moral statements are expressions of evaluative attitudes toward the object of evaluation
Blackburn's projectivism
Prescriptivism and Preference Utilitarianism
The Humean view of humanity: "Reason is the slave of the passions!"
🤔Isn't that exactly why humans are inferior to computers?
Even if reason accepts that a moral judgment is objectively correct, it will not be acted on without passion.
🤔I suspect an AI would say "such beings are detrimental to society" and remove them, don't you think?
Kohei Okubo: Unless our reasoning takes into account that "human beings are driven by emotions," I don't think we can make laws that actually work.
The more I study ethics, the stronger the feeling that "it's okay to kill most of Homo sapiens, right?" becomes.
Correction: not "it's okay to kill," but "an AI says, 'Rationally speaking, it's okay to kill, right?' and it's hard to argue back."
Kohei Okubo: That's a contradiction in terms, since the point in the first place is the greatest possible happiness of human beings.
If we go by "maximum human happiness," then declaring "if my wishes aren't met, I'll blow myself up and take many others with me" gets my wishes satisfied.
Well, killing most of Homo sapiens is too much trouble to argue about, so let's instead give them just the right dose of morphine and keep them in a state of forced happiness.
Suppose there is a perfect happiness pill with no side effects. Take it, and no matter what you experience, you will be filled with happiness for two or three days. Now, a parent was walking down the street with their small child. Suddenly a runaway car plowed into them and killed the child. The parent was distraught and panicked. The emergency personnel who arrived at the scene checked the parent's mental state and then injected the parent with the perfect happiness drug. The parent's heart was immediately filled with happiness. The parent then smiled at the paramedics, saying, "My child was killed today, but how happy I am."
"Freedom to feel unhappy" is being taken away from those parents.
Do humans have the "freedom to feel unhappiness"? Isn't that an "immoral" act that reduces the sum total of happiness?
Kohei Okubo
I don't think that's what the greatest happiness of the greatest number means.
We need a stricter definition of happiness.
In Chapter 4 there was an example of a happiness device implanted in the brain that could be turned on and off at will.
〜
Kant asserted that dignity is an "absolute inner value" given equally to every rational person, and that no one may do anything to damage it. We have an obligation to respect one another for the human dignity inherent in every person. As rational persons, human beings have inner freedom, and nothing may be done to deprive them of it.
But I can see an AI saying, "Well then, we can drown them in dope, since non-rational persons are not entitled to dignity."
Atsushi Harada
I doubt whether moral judgments can ultimately be said to be objectively correct or not.
After all, a deductive approach starts from axiomatic propositions taken to be correct without grounds. The axioms themselves cannot objectively guarantee correctness, and I wonder whether this difference in axioms, or their ambiguity, is what separates computers from humans.
Even if that were a difference, it would not by itself establish superiority or inferiority.
To begin with, discussing superiority or inferiority requires an evaluation method, and if it is based on a human axis, I don't think we can say computers are superior to all human beings.
As long as humans can tinker with the evaluation scale, they can always rank the computer below themselves.
To summarize: as a human, I don't think I can say that humans are superior to all computers, and I wonder whether a computer, once it could tweak the scale, would choose an evaluation scale that better fits its own perception of the world.
Atsushi Harada: If they evolve to that point, I have a feeling they will say, "We don't share values with humans, so we will live on Mars. Please let the human race live out its days quietly on the dying Earth. Farewell," and vanish.
I think that, once humans have dwindled, they will breed us in captivity like the crested ibis, saying, "From the viewpoint of animal welfare, it is not desirable that humans, who have some intelligence, go extinct."
Table of Contents
Part I. Normative Ethics What Should We Do?
Chapter 1: Deontology
Chapter 2: Utilitarianism
Chapter 3: Virtue Ethics
Chapter 4: Comparison of respective normative ethics
Part II Meta-ethics From Contemporary Analytic Philosophy
Chapter 5: On the Nature of the "Good"
Chapter 6: Non-cognitivism vs. Cognitivism
Chapter 7: Prescriptivism and Preference Utilitarianism
Chapter 8: Moral Psychology
Chapter 9: Is the "Ethical Actor = Rational Actor"?
Part III Applied Ethics What Should We Actually Do?
Chapter 10 Environmental Ethics
Chapter 11 Animal Ethics
Chapter 13 Bioethics (1) -- The Fetus and Abortion
Chapter 14 Bioethics (2) -- Brain Death and Organ Transplantation
Chapter 15: Medical Ethics -- Is it permissible to use and improve life?
---
This page is auto-translated from /nishio/「正しさ」の理由. If you find something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.